    Finding and Analyzing Evil Cities on the Internet

    IP Geolocation is used to determine the geographical location of Internet users based on their IP addresses. When it comes to security, most of the traditional geolocation analysis is performed at country level. Since countries usually have many cities/towns of different sizes, it is expected that they behave differently when performing malicious activities. Therefore, in this paper we refine geolocation analysis to the city level. The idea is to find the most dangerous cities on the Internet and observe how they behave. This information can then be used by security analysts to improve their methods and tools. To perform this analysis, we have obtained and evaluated data from a real-world honeypot network of 125 hosts and from production e-mail servers
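
    As an illustration of the kind of city-level aggregation described above, the sketch below counts attacker IPs per city using the geoip2 Python library against a local GeoLite2 City database; the database path and the input file of honeypot source addresses are assumptions for illustration, not artifacts of the paper.

        # Hypothetical sketch: rank cities by how many malicious source IPs they contribute.
        # Assumes a local GeoLite2-City.mmdb database and a text file with one attacker IP
        # per line; both are stand-ins, not data from the paper.
        from collections import Counter

        import geoip2.database
        import geoip2.errors

        def rank_cities(ip_file: str, mmdb_path: str = "GeoLite2-City.mmdb", top: int = 20):
            counts = Counter()
            with geoip2.database.Reader(mmdb_path) as reader, open(ip_file) as f:
                for line in f:
                    ip = line.strip()
                    if not ip:
                        continue
                    try:
                        rec = reader.city(ip)
                    except (geoip2.errors.AddressNotFoundError, ValueError):
                        continue  # skip addresses the database cannot place
                    counts[(rec.country.iso_code or "??", rec.city.name or "unknown")] += 1
            return counts.most_common(top)

        if __name__ == "__main__":
            for (country, city), n in rank_cities("honeypot_sources.txt"):
                print(f"{country:2s} {city:<25s} {n}")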

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from being uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming the so-called Bad Neighborhoods. This phenomenon has been previously exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be interchangeably used to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering
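
    One way to quantify how interchangeable such blacklists are is to compute their pairwise overlap, for instance with the Jaccard index. The sketch below is a minimal illustration, assuming each blacklist is a plain-text file with one network prefix per line; the file names are placeholders, not the blacklists used in the paper.

        # Hypothetical sketch: pairwise overlap between bad-neighborhood blacklists,
        # each stored as a plain-text file with one prefix (e.g. "192.0.2.0/24") per line.
        from itertools import combinations

        def load_blacklist(path: str) -> set[str]:
            with open(path) as f:
                return {line.strip() for line in f if line.strip()}

        def pairwise_overlap(paths: list[str]) -> None:
            lists = {p: load_blacklist(p) for p in paths}
            for a, b in combinations(paths, 2):
                shared = len(lists[a] & lists[b])
                union = len(lists[a] | lists[b])
                jaccard = shared / union if union else 0.0
                print(f"{a} vs {b}: shared={shared} jaccard={jaccard:.3f}")

        if __name__ == "__main__":
            # placeholder file names for three independently generated blacklists
            pairwise_overlap(["honeypot_bl.txt", "mailserver_bl.txt", "darknet_bl.txt"])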

    The concept of embedded values and the example of internet security

    Many current technological devices used in our everyday lives confront us with a host of new ethical issues to be addressed. Facebook, Twitter, or smart phones are all examples of technologies used quite pervasively which call into question culturally significant values like privacy, among others. The embedded values concept presents the compelling idea that engineers, scientists and designers can create technologies which intentionally enhance cultural and societal values while at the same time minimizing threats to other values. Although the embedded values concept (and the resulting design theories that follow) is of great utility, it remains unclear how to utilize this concept in practice. Added to this is the difficulty of utilizing this concept when engaged in fundamental research or experiments rather than in the creation of a commercial product. This paper presents a novel approach for collaboration between an ethicist and a computer engineering PhD researcher working on the Internet Bad Neighborhoods concept for spam filtering. The results proved beneficial in terms of both the utility of the embedded values concept as well as a strengthening of the engineering PhD researcher’s work

    When Parents and Children Disagree: Diving into DNS Delegation Inconsistency

    The Domain Name System (DNS) is a hierarchical, decentralized, and distributed database. A key mechanism that enables the DNS to be hierarchical and distributed is delegation [7] of responsibility from parent to child zones—typically managed by different entities. RFC1034 [12] states that authoritative nameserver (NS) records at both parent and child should be “consistent and remain so”, but we find inconsistencies for over 13M second-level domains. We classify the type of inconsistencies we observe, and the behavior of resolvers in the face of such inconsistencies, using RIPE Atlas to probe our experimental domain configured for different scenarios. Our results underline the risk such inconsistencies pose to the availability of misconfigured domains
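
    A minimal way to reproduce the parent/child comparison for a single domain is sketched below with the dnspython library: it fetches the delegation NS set from one parent (TLD) server and the authoritative NS set from one child server, then prints any difference. Server selection, transport, and error handling are heavily simplified, and the domain name is only an example.

        # Hypothetical sketch: compare the NS set delegated by the parent zone with the
        # NS set served authoritatively by the child, for one domain and one server each.
        import dns.message
        import dns.name
        import dns.query
        import dns.rdatatype
        import dns.resolver

        def ns_names(rrsets):
            # collect NS target names from a list of RRsets
            return {rr.target.to_text().lower()
                    for rrset in rrsets if rrset.rdtype == dns.rdatatype.NS
                    for rr in rrset}

        def parent_vs_child(domain: str) -> None:
            parent_zone = dns.name.from_text(domain).parent()                 # e.g. com.
            parent_ns = dns.resolver.resolve(parent_zone, "NS")[0].target.to_text()
            parent_ip = dns.resolver.resolve(parent_ns, "A")[0].address

            query = dns.message.make_query(domain, dns.rdatatype.NS)
            referral = dns.query.udp(query, parent_ip, timeout=5)
            parent_set = ns_names(referral.authority or referral.answer)      # delegation

            child_ip = dns.resolver.resolve(next(iter(parent_set)), "A")[0].address
            answer = dns.query.udp(query, child_ip, timeout=5)
            child_set = ns_names(answer.answer)                               # authoritative

            print("parent only:", parent_set - child_set)
            print("child only :", child_set - parent_set)

        if __name__ == "__main__":
            parent_vs_child("example.com.")                                   # example domain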

    Taking on internet bad neighborhoods

    It is a known fact that malicious IP addresses are not evenly distributed over the IP addressing space. In this paper, we frame networks concentrating malicious addresses as bad neighborhoods. We propose a formal definition, show that this concentration can be used to predict future attacks (new spamming sources, in our case), and propose an algorithm to aggregate individual IP addresses into bigger neighborhoods. Moreover, we show how bad neighborhoods are specific to the exploited application (e.g., spam, ssh) and how the performance of different blacklist sources impacts lightweight spam filtering algorithms
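
    The aggregation idea can be illustrated with a much-simplified sketch that groups malicious IPv4 addresses into fixed /24 prefixes and scores each prefix by the number of distinct bad hosts it contains; the algorithm in the paper also grows neighborhoods beyond /24, which this sketch omits.

        # Hypothetical sketch: score fixed /24 "neighborhoods" by how many distinct
        # malicious hosts they contain (a simplification of the paper's aggregation).
        import ipaddress
        from collections import defaultdict

        def bad_neighborhoods(bad_ips, min_hosts: int = 3):
            buckets = defaultdict(set)
            for ip in bad_ips:
                net = ipaddress.ip_network(f"{ip}/24", strict=False)
                buckets[net].add(ip)
            ranked = [(net, len(hosts)) for net, hosts in buckets.items()
                      if len(hosts) >= min_hosts]
            return sorted(ranked, key=lambda item: item[1], reverse=True)

        if __name__ == "__main__":
            sample = ["198.51.100.7", "198.51.100.9", "198.51.100.200", "203.0.113.5"]
            for net, n in bad_neighborhoods(sample, min_hosts=2):
                print(net, n)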

    Optical Switching Impact on TCP Throughput Limited by TCP Buffers

    In this paper, we study the TCP throughput when self-management is employed to automatically move flows from the IP level to established connections at the optical level. This move can result in many packets arriving out of order at the receiver and even being discarded, since some of these packets would be transferred more quickly over an optical connection than the other packets transferred over an IP path. To the best of our knowledge, so far there is no work in the literature that evaluates the TCP throughput when flows undergo such conditions. Within this context, this paper presents an analysis of the impact of optical switching on the TCP CUBIC throughput when the throughput itself is limited by the TCP buffer sizes
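
    The buffer-limited ceiling referred to here is the classic bound that a TCP flow's throughput cannot exceed its window (buffer) size divided by the round-trip time. The toy calculation below uses made-up numbers, not measurements from the paper, to show how a longer path RTT lowers that ceiling.

        # Illustrative only: throughput <= buffer / RTT. The buffer size and RTTs are
        # invented to show the shape of the bound, not taken from the paper.
        def max_throughput_mbps(buffer_bytes: float, rtt_seconds: float) -> float:
            return buffer_bytes * 8 / rtt_seconds / 1e6

        BUFFER = 256 * 1024  # 256 KiB TCP buffer
        for rtt_ms in (10, 50, 100):
            limit = max_throughput_mbps(BUFFER, rtt_ms / 1000)
            print(f"RTT {rtt_ms:3d} ms -> at most {limit:7.1f} Mbit/s")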

    Clouding up the Internet: How centralized is DNS traffic becoming?

    Concern has been mounting about Internet centralization over the last few years - consolidation of traffic/users/infrastructure into the hands of a few market players. We measure DNS and computing centralization by analyzing DNS traffic collected at a DNS root server and two country-code top-level domains (ccTLDs) - one in Europe and the other in Oceania - and show evidence of concentration. More than 30% of all queries to both ccTLDs are sent from 5 large cloud providers. We compare the clouds' resolver infrastructure and highlight a discrepancy in behavior: some cloud providers heavily employ IPv6, DNSSEC, and DNS over TCP, while others simply use unsecured DNS over UDP over IPv4. We show one positive side to centralization: once a cloud provider deploys a security feature - such as QNAME minimization - it quickly benefits a large number of users
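
    Measuring this kind of concentration comes down to attributing each query to the operator of its source address. The sketch below assumes a pre-computed query log in CSV form with a src_asn column, and an illustrative list of cloud ASNs that is not taken from the paper; it reports the fraction of queries those operators account for.

        # Hypothetical sketch: share of DNS queries originating from a fixed set of
        # cloud ASNs. The CSV path, column name, and ASN list are illustrative.
        import csv

        CLOUD_ASNS = {15169: "Google", 16509: "Amazon", 8075: "Microsoft",
                      13335: "Cloudflare", 36692: "Cisco OpenDNS"}

        def cloud_share(csv_path: str) -> float:
            total = cloud = 0
            with open(csv_path, newline="") as f:
                for row in csv.DictReader(f):
                    total += 1
                    if int(row["src_asn"]) in CLOUD_ASNS:
                        cloud += 1
            return cloud / total if total else 0.0

        if __name__ == "__main__":
            print(f"queries from the listed clouds: {cloud_share('queries.csv'):.1%}")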

    Into the DDoS maelstrom: A longitudinal study of a scrubbing service

    Distributed denial-of-service (DDoS) attacks are nowadays easy and cheap to carry out, and have become bigger and more frequent over recent years. Cloud-based scrubbers have emerged as a service that victims can hire on demand to fend off attacks. There are many industry players, but little insight into their operations. This work unravels for the first time the inner workings of a DDoS scrubber - NaWas - a non-profit scrubber in the Netherlands. We analyze 1800+ DDoS attacks spanning a period of 22 months, and show that while most attacks are not very large, they are still large enough to disrupt services and likely to disturb links. We estimate the collateral damage incurred by DDoS attacks, and demonstrate that the number of victims is at least quadratically larger than the number of targeted addresses. Last, by correlating attack metadata with authoritative DNS traffic, we show that DDoS attacks leave fingerprints on DNS traffic, which, in turn, can be used to detect DDoS attacks at early stages, even if attackers attempt to deceive DNS-based detection

    Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event

    Distributed Denial-of-Service (DDoS) attacks continue to be a major threat in the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate the service in multiple physical locations or sites. If all sites announce a common IP address, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast addresses DDoS both by increasing capacity to the aggregate of many sites, and by allowing each catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used for commercial CDNs and essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several anycast services under stress with public data. Our subject is the Internet’s Root Domain Name Service, made up of 13 independently designed services (“letters”, 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100× normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to these events. We see how different anycast deployments respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies result in different levels of service to different users. We also show evidence of collateral damage on other services located near the attacks
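
    The catchment that a vantage point falls into can be probed with the standard CHAOS-class trick of asking an anycast server which instance answered. The sketch below, using dnspython, queries hostname.bind on k.root-servers.net as an example; this is a generic measurement technique, not the specific methodology of the paper.

        # Hypothetical sketch: ask an anycast DNS server which site/instance answered
        # by querying the CHAOS-class TXT record hostname.bind (id.server also works).
        import dns.message
        import dns.query
        import dns.rdataclass
        import dns.rdatatype

        def answering_instance(server_ip: str = "193.0.14.129") -> str:  # k.root-servers.net
            query = dns.message.make_query("hostname.bind", dns.rdatatype.TXT,
                                           rdclass=dns.rdataclass.CH)
            response = dns.query.udp(query, server_ip, timeout=5)
            return response.answer[0][0].strings[0].decode()

        if __name__ == "__main__":
            print("answering instance:", answering_instance())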